Black Lives Matter (BLM) is a decentralized social movement protesting violence against Black individuals and communities, with a focus on police brutality. The movement came to prominence in 2020 following the killings of Ahmaud Arbery, Breonna Taylor, and George Floyd. The #BlackLivesMatter social media hashtag has come to represent the grassroots movement, with similar hashtags used to counter-protest the BLM movement, such as #AllLivesMatter and #BlueLivesMatter. We introduce a dataset of 63.9 million tweets from 13 million users in over 100 countries that contain one of the following keywords: BlackLivesMatter, AllLivesMatter, or BlueLivesMatter. The dataset contains all currently available tweets from the beginning of the BLM movement in 2013 through 2021. We summarize the dataset and show temporal trends in the use of the BlackLivesMatter keyword and the keywords associated with counter-movements. Additionally, for each keyword we create and release a set of latent Dirichlet allocation (LDA) topics (i.e., automatically clustered groups of semantically co-occurring words) to help researchers identify linguistic patterns across the three keywords.
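As a rough illustration of how such LDA topics can be reproduced or extended, the sketch below fits a topic model for one keyword with scikit-learn and prints the top words per topic; the load_tweets helper, the topic count, and the vocabulary size are placeholder assumptions, not part of the dataset release.

    # Minimal sketch: LDA topics over tweets for a single keyword (hypothetical loader).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    tweets = load_tweets("blacklivesmatter")  # placeholder: any iterable of tweet texts
    vec = CountVectorizer(stop_words="english", max_features=5000)
    X = vec.fit_transform(tweets)

    lda = LatentDirichletAllocation(n_components=20, random_state=0)
    lda.fit(X)

    terms = vec.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [terms[i] for i in topic.argsort()[::-1][:10]]
        print(f"topic {k}: {' '.join(top)}")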
Traditional approaches to reinforcement learning (RL) have focused on learning decision policies directly from episodic decisions, while slowly and implicitly learning the semantics of the compositional representations needed for generalization. Although some approaches refine representations via auxiliary self-supervised losses while simultaneously learning decision policies, learning compositional representations from hand-designed, context-independent self-supervised losses (multi-view) still adapts relatively slowly to the real world, which contains many non-IID subspaces requiring rapid distribution shift in both temporal and spatial attention patterns at varying levels of abstraction. In contrast, supervised language model cascades have shown the flexibility to adapt to many diverse manifolds, and hints of the self-learning needed for autonomous task transfer. However, to date, transfer methods for language models such as few-shot learning and fine-tuning still require human supervision, and transfer learning using self-learning methods has been underexplored. We propose a self-supervised loss policy called contrastive distillation, which manifests latent variables with high mutual information with both source and target tasks from weights to tokens. We show that this outperforms common transfer-learning methods and suggests a useful design axis: trading off compute for generalizability in online transfer. Contrastive distillation is improved by sampling from memory, and it suggests a simple algorithm for sampling negative examples for contrastive losses more efficiently than random sampling.
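To make the contrastive objective concrete, below is a minimal sketch of an InfoNCE-style loss in which negative examples are drawn from a memory of past embeddings rather than sampled uniformly at random; the memory object, its sampling method, and the temperature are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn.functional as F

    def info_nce(query, positive, negatives, temperature=0.1):
        """Pull the query toward the positive; push it away from the negatives."""
        q = F.normalize(query, dim=-1)        # (d,)
        pos = F.normalize(positive, dim=-1)   # (d,)
        neg = F.normalize(negatives, dim=-1)  # (n, d)
        logits = torch.cat([(q * pos).sum(-1, keepdim=True), neg @ q]) / temperature
        labels = torch.zeros(1, dtype=torch.long)  # index 0 is the positive
        return F.cross_entropy(logits.unsqueeze(0), labels)

    # Negatives drawn from memory (hypothetical interface), not at random.
    negatives = memory.sample_negatives(query, k=128)
    loss = info_nce(query, positive, negatives)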
Realistic synthetic image data rendered from 3D models can be used to augment image sets and train image classification and semantic segmentation models. In this work, we explore how high-quality physically-based rendering and domain randomization can efficiently create a large synthetic dataset based on production 3D CAD models of a real vehicle. We use this dataset to quantify the effectiveness of synthetic augmentation using U-net and Double-U-net models. We found that, for this domain, synthetic images were an effective technique for augmenting limited sets of real training data. We observed that models trained on purely synthetic images had a very low mean prediction IoU on real validation images. We also observed that adding even very small amounts of real images to a synthetic dataset greatly improved accuracy, and that models trained on datasets augmented with synthetic images were more accurate than those trained on real images alone. Finally, we found that in use cases that benefit from incremental training or model specialization, pretraining a base model on synthetic images provided a sizeable reduction in the training cost of transfer learning, allowing up to 90% of the model training to be front-loaded.
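A hedged sketch of the training regimes discussed above, front-loading training on synthetic renders before a short fine-tune on real data; the dataset objects, the UNet constructor, and the epoch split are placeholders rather than the authors' training code.

    import torch
    from torch.utils.data import ConcatDataset, DataLoader

    model = UNet(num_classes=2)  # placeholder segmentation model
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    def run_epochs(loader, epochs):
        for _ in range(epochs):
            for images, masks in loader:
                opt.zero_grad()
                loss = torch.nn.functional.cross_entropy(model(images), masks)
                loss.backward()
                opt.step()

    # Pretrain the base model on synthetic images (the front-loaded ~90% of training)...
    run_epochs(DataLoader(synthetic_ds, batch_size=8, shuffle=True), epochs=45)
    # ...then fine-tune briefly on the limited real data.
    run_epochs(DataLoader(real_ds, batch_size=8, shuffle=True), epochs=5)
    # Alternatively, real and synthetic sets can simply be mixed:
    # run_epochs(DataLoader(ConcatDataset([real_ds, synthetic_ds]), batch_size=8, shuffle=True), epochs=50)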
Due to the low signal-to-noise ratio and limited resolution of functional MRI data, and the high complexity of natural images, reconstructing a visual stimulus from human brain fMRI measurements is a challenging task. In this work, we propose a novel approach for this task, which we call Cortex2Image, to decode visual stimuli with high semantic fidelity and rich fine-grained detail. In particular, we first train a surface-based convolutional network model that maps from brain response to semantic image features (Cortex2Semantic). We then combine this model with a high-quality image generator (Instance-Conditioned GAN) to train another mapping from brain response to fine-grained image features using a variational approach (Cortex2Detail). Image reconstructions obtained by our proposed method achieve state-of-the-art semantic fidelity, while yielding good fine-grained similarity with the ground-truth stimulus. Our code is available at: https://github.com/zijin-gu/meshconv-decoding.git.
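The decoding pipeline can be summarized as below; the module handles and the generator interface follow the description in the abstract but are illustrative assumptions, not the released code (see the repository linked above for the actual implementation).

    import torch

    # Stage 1: surface-based CNN maps the fMRI response to semantic image features.
    semantic_feat = cortex2semantic(brain_response)

    # Stage 2: variational mapping from the fMRI response to fine-grained image features.
    mu, logvar = cortex2detail(brain_response)
    detail_feat = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization

    # Both feature sets condition a pretrained Instance-Conditioned GAN generator.
    reconstruction = icgan_generator(detail_feat, semantic_feat)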
Breast cancer is the second most common type of cancer in women in Canada and the United States, representing over 25% of all new female cancer cases. Neoadjuvant chemotherapy has recently risen in usage, as it may result in a pathologic complete response (pCR) and can shrink inoperable breast cancer tumors prior to surgery so that they become operable; however, it is difficult to predict a patient's pathologic response to neoadjuvant chemotherapy. In this paper, we investigate the efficacy of leveraging learnt volumetric deep features from a newly introduced magnetic resonance imaging (MRI) modality called synthetic correlated diffusion imaging (CDI$^s$) for the purpose of pCR prediction. More specifically, we leverage a volumetric convolutional neural network to learn volumetric deep radiomic features from a pre-treatment cohort and construct a predictor based on the learnt features using the post-treatment response. As the first study to explore the utility of CDI$^s$ from a deep learning perspective for clinical decision support, we evaluated the proposed approach on the ACRIN-6698 study against predictors learnt using gold-standard imaging modalities, and found that the proposed approach can provide enhanced pCR prediction performance and may thus be a useful tool to aid oncologists in improving treatment recommendations for patients. This approach to leveraging volumetric deep radiomic features (which we name Cancer-Net BCa) can be further extended to other applications of CDI$^s$ in the cancer domain to further improve prediction performance.
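For concreteness, here is a minimal sketch of a volumetric CNN of the kind described, taking a CDI$^s$ volume and producing a pCR probability; layer sizes and input shape are assumptions for illustration, not the Cancer-Net BCa architecture.

    import torch
    import torch.nn as nn

    class VolumetricPCRNet(nn.Module):
        """Toy 3D CNN: CDI^s volume in, pCR probability out (illustrative only)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(32, 1)

        def forward(self, volume):                    # volume: (batch, 1, D, H, W)
            x = self.features(volume).flatten(1)      # volumetric deep radiomic features
            return torch.sigmoid(self.classifier(x))  # probability of pathologic complete response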
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
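Because the checkpoints are openly released, they can be loaded through the Hugging Face transformers library; the snippet below uses one of the smaller BLOOM checkpoints for illustration (the checkpoint choice and prompt are assumptions, and the full 176B model requires substantially more memory).

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
    model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

    inputs = tokenizer("A language model is", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=30)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))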
Convolutional neural networks trained with manually generated labels are commonly used for semantic or instance segmentation. In precision agriculture, automated flower detection methods use supervised models and post-processing techniques that may not perform consistently as the appearance of flowers and data acquisition conditions vary. We propose a self-supervised learning strategy to enhance the sensitivity of segmentation models to different flower species using automatically generated pseudo-labels. We employ data augmentation and refinement methods to improve the accuracy of the model's predictions. The augmented semantic predictions are then converted into panoptic pseudo-labels to iteratively train a multi-task model. The self-supervised model's predictions can be refined with existing post-processing methods to further improve their accuracy. Evaluation on a multi-species fruit tree flower dataset demonstrates that our method outperforms state-of-the-art models without requiring computationally expensive post-processing steps, providing a new baseline for flower detection applications.
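At a high level, the iterative pseudo-label loop can be sketched as follows; every helper here (loading, augmentation, refinement, panoptic conversion, multi-task training) is a named placeholder for the corresponding step described above, not the authors' code.

    # Hedged sketch of self-training with automatically generated panoptic pseudo-labels.
    model = load_pretrained_segmentation_model()  # placeholder
    for round_idx in range(num_rounds):
        pseudo_labels = []
        for image in flower_images:
            pred = refine(model.predict(augment(image)))  # augmentation + refinement of predictions
            pseudo_labels.append(to_panoptic(pred))       # semantic prediction -> panoptic pseudo-label
        model = train_multitask(model, flower_images, pseudo_labels)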
Equity is widely held to be fundamental to the ethics of healthcare. In the context of clinical decision-making, it rests on the comparative fidelity of the intelligence, whether evidence-based or intuitive, guiding the management of each patient. Although brought to recent attention by the individuating power of contemporary machine learning, such epistemic equity arises in the context of any decision guidance, whether traditional or innovative. Yet no general framework for its quantification, let alone its assurance, currently exists. Here we formulate epistemic equity in terms of the fidelity of a model evaluated over learnt multidimensional representations of identity designed to maximize the captured diversity of the population, introducing a comprehensive framework for representational ethical model calibration. We demonstrate the use of the framework on large-scale multimodal data from UK Biobank to derive diverse representations of the population, quantify model performance, and institute responsive remediation. We offer our approach as a principled solution to quantifying and assuring epistemic equity in healthcare, with applications across the research, clinical, and regulatory domains.
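Read operationally, one step of the framework is: cluster a learnt representation of identity, then measure model fidelity within each cluster and flag groups where it falls short. The sketch below illustrates that step with scikit-learn; the number of clusters, the metric, and the threshold are assumptions, not the paper's procedure.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import roc_auc_score

    # embeddings: learnt multidimensional representation of identity, shape (n_people, d)
    # y_true, y_score: outcome labels and model outputs for the same individuals
    groups = KMeans(n_clusters=10, random_state=0).fit_predict(embeddings)

    for g in np.unique(groups):
        mask = groups == g
        fidelity = roc_auc_score(y_true[mask], y_score[mask])
        if fidelity < 0.75:  # illustrative threshold
            print(f"group {g}: AUC={fidelity:.2f} -> candidate for responsive remediation")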
The ability to separate signal from noise, and to reason with clean abstractions, is critical to intelligence. With this ability, humans can efficiently perform real-world tasks without considering every possible nuisance factor. How can artificial agents do the same? What kinds of information can an agent safely discard as noise? In this work, we categorize information in the wild into four types based on controllability and its relation to reward, and formulate useful information as that which is both controllable and reward-relevant. This framework clarifies the kinds of information removed by various prior work in reinforcement learning (RL), and leads to our proposed learning approach: learning a Denoised MDP that explicitly factors out certain noise distractors. Extensive experiments on variants of the DeepMind Control Suite and RoboDesk show that our denoised world model outperforms using raw observations alone, and surpasses prior work, on policy-optimization control tasks as well as the non-control task of joint position regression.
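As a caricature of the factorization (not the paper's exact model), the sketch below splits the latent state into a signal factor that is controllable and reward-relevant and a noise factor that evolves on its own; only the signal factor feeds the reward head.

    import torch
    import torch.nn as nn

    class FactoredLatentModel(nn.Module):
        """Illustrative split of the latent state into signal (x) and noise (y) factors."""
        def __init__(self, obs_dim, x_dim, y_dim, action_dim):
            super().__init__()
            self.encoder = nn.Linear(obs_dim, x_dim + y_dim)
            self.x_dim = x_dim
            self.x_dynamics = nn.Linear(x_dim + action_dim, x_dim)  # controllable, reward-relevant
            self.y_dynamics = nn.Linear(y_dim, y_dim)               # noise: unaffected by the action
            self.reward_head = nn.Linear(x_dim, 1)                  # reward depends only on the signal

        def forward(self, obs, action):
            z = self.encoder(obs)
            x, y = z[..., :self.x_dim], z[..., self.x_dim:]
            next_x = self.x_dynamics(torch.cat([x, action], dim=-1))
            next_y = self.y_dynamics(y)
            return next_x, next_y, self.reward_head(next_x)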
In this paper we introduce Jointist, an instrument-aware multi-instrument framework capable of transcribing, recognizing, and separating multiple musical instruments from an audio clip. Jointist consists of an instrument recognition module that conditions the other modules: a transcription module that outputs instrument-specific piano rolls, and a source separation module that utilizes the instrument information and transcription results. The instrument conditioning is designed for explicit multi-instrument functionality, while the connection between the transcription and source separation modules is designed for better transcription performance. Our challenging problem formulation makes the model highly useful in the real world, as modern popular music typically consists of multiple instruments. However, its novelty requires a new perspective on how to evaluate such a model. During the experiments, we evaluate the model from various aspects, providing a new evaluation perspective for multi-instrument transcription. We also argue that transcription models can be used as a preprocessing module for other music analysis tasks. In experiments on several downstream tasks, the symbolic representations provided by our transcription model were helpful, alongside spectrograms, in solving downbeat detection, chord recognition, and key estimation.
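A hedged sketch of how the instrument conditioning ties the modules together; the three callables and their keyword arguments are illustrative stand-ins for the modules named above, not the Jointist interface.

    # Illustrative flow: instrument recognition conditions transcription and separation.
    instruments = instrument_recognizer(audio)  # e.g. a list of detected instrument IDs
    piano_rolls, stems = {}, {}
    for inst in instruments:
        piano_rolls[inst] = transcriber(audio, condition=inst)    # instrument-specific piano roll
        stems[inst] = separator(audio, condition=inst,
                                transcription=piano_rolls[inst])  # transcription aids separation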